Originally published in November 2024 | Research Note contributed by Brandon Mattalo and Rima Khatib, Lazaridis School of Business and Economics
The study of artificial intelligence in education (AIEd) is not new. One aspect of AIEd research focuses on “Learning with AI,” which investigates how educators can use AI to improve student learning outcomes and assist instructors with delivering their courses (Holmes 2022, 19).
Historically, this research and the use of AI in education have been limited by the available technologies (Mattalo 2024, 54). Since the public release of GPT-3.5, however, interest in the field has been renewed. The capabilities of text-based generative AI systems enable use cases for AI in education that were not possible with earlier technologies, such as easy training on course materials, summarizing text, providing personalized tutoring, generating exam questions, and evaluating student answers. While the technology is imperfect and has well-documented faults, the expectation is that it will continue to improve, although such improvements are unlikely to be linear or exponential.
AIEd also comes with risks. Within the educational context, the main risk is students using Gen AI to circumvent the learning and evaluation processes by, amongst other things, engaging in academic misconduct. In a recent survey, KPMG estimated that approximately 60 percent of students in Canada are using Gen AI in their schoolwork, and 80 percent of those students are passing off Gen AI’s work products as their own (KPMG 2024). This is not surprising. Gen AI tools are free and accessible, and without clear direction from instructors on how to use these tools, students default to using Gen AI to minimize the amount of effort they need to achieve good grades.
Since students are already using Gen AI, instructors may want to reckon with its existence, understand what factors encourage students to use Gen AI properly, and consider whether its use has the potential to improve learning outcomes.
In our study, we set out to learn how students perceive the use of a Gen AI tutor in a university lab, how these perceptions would impact its potential adoption, and how its use would impact students’ learning outcomes.
There are two well-known theoretical frameworks we used as the basis for our study. The first, drawn from pedagogical research, is Biggs’ 3P Model of Teaching and Learning (Biggs 1993), which suggests that learning outcomes (i.e., product) are a bidirectional function of students’ perceptions of the learning context and educators’ design of teaching contexts (presage), as well as students’ learning process (process). The second framework, drawn from information systems research, is the Technology Acceptance Model (TAM), which explains the factors that contribute to users’ decisions to adopt new technologies (Davis 1989).
With REB approval, we conducted our study in a live course during the Winter 2024 semester (MB115, an Information Technology course taught by Rima Khatib). Students who participated in the study used ChatGPT tutorials for an average of 39.8 minutes per week to assist with their lab work.
Our findings show that students’ positive attitudes towards Gen AI and their perceptions of its usefulness strongly predicted whether they perceived Gen AI as enhancing their learning process. On the other hand, students’ perceptions about how easy it was to use Gen AI did not have a significant impact on their perception of its enhancement of their learning process.
Consistent with TAM, these results suggest that students’ positive attitudes towards Gen AI and their perception of Gen AI’s usefulness are significant factors that may impact students’ decision to adopt a Gen AI tutor, while students’ perceptions about its ease of use are unlikely to impact this adoption decision.
| Predictor | R² | Beta | Result |
|---|---|---|---|
| Perceived Usefulness (PU) | 0.62 | 0.57 (p < 0.001) | Significant |
| Attitude (ATT) | | 0.32 (p = 0.01) | Significant |
| Perceived Ease of Use (PEOU) | | 0.04 (p = 0.7) | Not significant |
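For readers unfamiliar with how results like these are produced, the pattern above can be illustrated with an ordinary least squares regression. This is a minimal sketch, not the authors’ actual analysis: the survey responses here are synthetic, and the simulated effect sizes are chosen only to mirror the reported direction of the findings.

```python
# Illustrative sketch with synthetic data: regressing a perceived
# learning-enhancement score (PLE) on the three TAM predictors (PU, ATT, PEOU)
# using ordinary least squares. All data below are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of survey respondents

# Hypothetical 7-point Likert-style responses for each TAM construct.
pu   = rng.integers(1, 8, n).astype(float)  # Perceived Usefulness
att  = rng.integers(1, 8, n).astype(float)  # Attitude
peou = rng.integers(1, 8, n).astype(float)  # Perceived Ease of Use

# Simulated outcome: PU and ATT contribute, PEOU barely does,
# echoing the significance pattern in the table above.
ple = 0.57 * pu + 0.32 * att + 0.04 * peou + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), pu, att, peou])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, ple, rcond=None)    # OLS coefficient estimates

# R-squared: proportion of outcome variance explained by the model.
resid = ple - X @ beta
r2 = 1 - resid.var() / ple.var()

print(f"betas (PU, ATT, PEOU): {beta[1]:.2f}, {beta[2]:.2f}, {beta[3]:.2f}")
print(f"R^2: {r2:.2f}")
```

In practice, a study like this would also report standard errors and p-values for each coefficient (e.g., via a statistics package rather than raw least squares), which is how the significance labels in the table are determined.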
Therefore, this research suggests that if instructors choose to encourage students to adopt Gen AI in a course, they might want to focus on educating students about the usefulness of Gen AI and fostering a positive attitude towards it in the classroom.
The next aspect of our study looked at the effect of Gen AI use on student outcomes. Students whose use of Gen AI was rated excellent performed well in the labs, scoring an average of 9.24 out of 10 on the lab assessments. We defined “excellent” use as fully following the instructions for using Gen AI in the lab and asking personalized follow-up questions to further learn the materials required for the lab. Students whose quality of use ranged from good to excellent also performed well, scoring an average of 8.95 out of 10.
In comparison, students whose use of Gen AI was low quality (and who showed no evidence of cheating) scored an average of 7.26 out of 10 on the lab assessments, while students who did not use Gen AI in the tutorials scored an average of 7.9 out of 10. Altogether, these results suggest that proper use of Gen AI in a lab environment may positively impact learning outcomes.
However, our study also showed that students who used Gen AI to circumvent the learning process by cheating performed well on the labs, scoring an average of 9.04 out of 10.
These results suggest that the effective use of a Gen AI tutor in a lab setting could potentially improve student outcomes, but similar outcomes can also be achieved by using the same tool to cheat. This implies that instructors may want to (a) design learning tools that include Gen AI, (b) ensure that these tools are purposefully created to achieve desired learning outcomes, (c) educate and encourage students to use them effectively, while (d) also designing assessments that prevent the use of Gen AI to circumvent the intended learning outcomes.
There are limitations to our research, including the presence of potential confounding variables and the constraints of our study design, which were bounded by ethical concerns about restricting access to resources for some students while providing access to others. However, this research provides an initial suggestion that effective use of Gen AI tutorials in a university course can improve student outcomes while also highlighting the risk of allowing students to use Gen AI to circumvent the learning process.